What follows are some partial notes on ``The Intentional Stance'' by
Daniel Dennett.  Part way through reading the book, it occurred to me
that AI (and philosophy also) could benefit by taking the design
stance more seriously.  Namely, we should try to devise a language,
presumably first order, for solving some of the puzzles pointed at
by Dennett, especially those in the rather long ``Beyond Belief.''

	The key problem is epistemological adequacy.  Dennett describes
many epistemological situations.  We need to express the beliefs
of the subjects in these situations.

dennet[e88,jmc]		Dennett's "The intentional stance"

20 confuses the limited specificity of a desire for food with the
greater specificity needed in a restaurant order.  Once the order has
been given, there is a new desire that it be executed as ordered.

29 ``... all there is to being a true believer is being a system whose
behavior is reliably predictable via the intentional strategy,
and hence {\it all there is} to really and truly believing that $p$
(for any proposition $p$) is being an intentional system for which $p$
occurs as a belief in the best (most predictive) interpretation.''

Usually we want more than predictability; we need the ability to quantify,
e.g. we need to be able to say that no matter what I say, I can't
convince him that 2 + 2 = 5.  It seems to be a common error to suppose
that science is merely after predictability.  One could be able to
predict the paths of the planets with arbitrary accuracy for arbitrarily
long times and still not know about the conservation laws of energy,
momentum and angular momentum.
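
As a first attempt at writing the quantified claim down (the predicate
symbols here are purely illustrative, not anyone's official notation):

$$\forall s\,(\mathit{utterance}(s) \supset
   \neg\,\mathit{believes}(\mathit{he},\ \mathit{Equal}(\mathit{Sum}(2,2),5),\ \mathit{after}(\mathit{say}(I,s)))).$$

Prediction of behavior for particular inputs does not by itself license
the universal quantifier over all possible utterances $s$.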

32 It is not that we attribute (or should attribute) beliefs and desires
only to things in which we find internal representations, but rather
that when we discover some object for which the intentional strategy works,
we endeavor to interpret some of its internal states or processes as internal
representations.  What makes some internal feature of a thing a representation
could only be its role in regulating the behavior of an intentional system.

74 Marr's 3 levels are (1) describing the function, (2) describing
the algorithm, (3) describing the hardware that realizes the
algorithm.  Evidently there are levels more abstract than (1), e.g.
giving the considerations on which the function is based.

125 It seems to me that Dennett makes insufficient use of the design
stance --- less than he did in Brainstorms.  For example, he doesn't
discuss how one might build a system in which propositions might
play a causal role.

127 Suppose that XYZ differs from H2O in some exotic property
that has never come up on Earth or Twin Earth.  The Earth man
and the Twin Earth man would be in for a surprise if they
communicated and then encountered the exotic property.
Two people on earth are in for a surprise if they are using
the same word in different ways, but haven't, even after
much mutual interaction, encountered a situation where it
makes a difference.  Fortunately, this happens rarely enough
not to refute our beliefs that we understand words
in the same way.  It happens to children all the time, and
they are prepared to correct their beliefs about the meanings
of particular words.
It is relevant that the laws of chemistry don't seem to allow
any other substance that can play the role of Putnam's XYZ.
If many such substitutions occurred, our ability to assign
meanings to words would be jeopardized.

122 Dennett gives desiderata (a), (b) and (c) for propositions and
remarks that these don't seem to be simultaneously realizable.
As I remarked above Dennett may be criticized for not taking
the design stance seriously enough.  Here we have a question.
Can we usefully build into a robot propositions that satisfy
Dennett's (abc)?

	Satisfying (a) requires that the robot's
propositions have definite truth values.  This seems to mean
that we, the builders of the robot, can say whether each is
true.  If the robot was built on Earth and can recognize
water, it will erroneously say ``This is water'' if transported,
unbeknownst to it, to Putnam's Twin
Earth and confronted with Putnam's XYZ.  This doesn't bother
the robot builders for several reasons. (1) Twin Earth is
impossible.  There is no other substance that behaves just
like water.  The laws of chemistry don't permit it.  A trifle
to a philosopher but comforting to the engineer.  (2) By
Putnam's hypothesis, it would achieve its goals on Twin
Earth, since XYZ behaves like water.

	Maybe Twin Earth is too pure an example.  Perhaps other
examples exist that can arise.  In this case, our robot might be
brittle.  Moved to an environment to which people could adapt after
initial confusion, it might be unable to adapt.  It seems to me that we can fix
this using contexts.  All the facts we build into it are relative to a
certain context C0.  As long as it stays where C0 is valid, it behaves
just as if the facts were asserted absolutely.  If it is put in a
context where these propositions are not all true and gets a
contradiction, it uses its nonmonotonic upward inheritance rules to
preserve a collection of its beliefs that is maximal relative to its
new experience.
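
A minimal sketch of how this might look in a formalism of contexts (the
particular predicate, context and abnormality symbols are only illustrative):

$$\mathit{ist}(C_0,\ \forall x\,(\mathit{LooksAndBehavesLikeWater}(x) \supset \mathit{Water}(x)))$$

together with a nonmonotonic upward inheritance rule of roughly the form

$$\mathit{ist}(C_0,p) \wedge \neg\,\mathit{ab}(p,c) \supset \mathit{ist}(c,p),$$

so that in a context $c$ where some inherited beliefs lead to contradiction,
the abnormality predicate blocks just those beliefs and a maximal consistent
collection survives.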

	Dennett's (c) is ``It is graspable by the mind''.
Unless I misunderstand him, this can be assured by representing
the beliefs by sentences in the memory of the computer provided
we are also successful in making sure that the beliefs
explicitly represented form what he earlier calls ``core
beliefs'' and they and their consequences also fit his
criteria for being the ``most predictive'' set of beliefs.

	(b) is that the propositions be built up from intensions.
I don't (sep 10) understand what this is.  Aha, p. 131 suggests
that it is the requirement that the belief that it is desirable
to marry Jocasta be different from the belief that it is
desirable to marry Oedipus's mother.  It seems to me that
this condition will be satisfied if the language of my
First Order Theory of Individual Concepts and Propositions
is used for expressing beliefs about beliefs and other
intentional entities.
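
To illustrate in the spirit of that theory (the symbols are mine, chosen
for this example): concepts are objects in their own right, so we can have

$$\mathit{denot}(\mathit{Jocasta}) = \mathit{denot}(\mathit{MotherOf}(\mathit{Oedipus}))$$

while

$$\mathit{Jocasta} \neq \mathit{MotherOf}(\mathit{Oedipus}),$$

which permits

$$\mathit{desires}(\mathit{oedipus},\ \mathit{Marry}(\mathit{Jocasta})) \wedge
  \neg\,\mathit{desires}(\mathit{oedipus},\ \mathit{Marry}(\mathit{MotherOf}(\mathit{Oedipus})))$$

without contradiction, since the desire predicate applies to concepts rather
than to the woman herself.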

	Doubtless careful reading of Dennett's book and the
writings of other critics of (abc) would determine limitations
of the usefulness of our robot.  Probably these limitations
would manifest themselves in brittleness.  It would be interesting
to know whether any particular extension of the environment could be handled
by using contexts while preserving (abc).

134 To what extent can cryptography save Descartes?  He has a complex
structure of beliefs, some of which are labelled in a particular way
that the careless observer will identify as the label for past
observations.  The careful observer (Putnam) will doubt whether
they warrant this label and whether other labels on belief tokens warrant
the interpretations the naive observer proposes to give them.  However,
Putnam will not be able to think of any other interpretation that
makes sense of the whole complex structure, and the cryptographer
abetted by Kolmogorov and Chaitin will assure Putnam that it is
very unlikely (i.e. less probable than that Putnam will die
from the oxygen molecules all rushing away from his lungs) that
another predictive interpretation is possible.

136 The one ``cheap trick''  Dennett actually cites won't work.  If the
autobiography includes the sentence, ``I said, `I won't go.''',
then uniformly replacing ``I'' by ``he'' changes the meaning.  We'd
need ``He said, `I won't go''' to preserve the meaning.  The fact
that ``I'' is the first person singular is confirmed by many
features of the way it is used in an extended text.
Shannon (1948) gives lots of evidence on this point.
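
A toy illustration of the point in code (nothing here is from the text; it
merely exhibits the failure of the uniform substitution):

autobiography = "I said, 'I won't go.'"
# Uniform replacement of the first person pronoun also changes the quoted speech:
print(autobiography.replace("I ", "he "))   # he said, 'he won't go.'
# What preserves the meaning keeps the pronoun inside the quotation:
print("He said, 'I won't go.'")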

145 The fact that a signal is on a certain line or neuron replaces
some of its syntax.  Syntax might be replaced by temporal sequence,
i.e. by putting it in a time slot.

	Conversely, suppose that a thermostat is to work on a bus,
i.e. it is to accept temperature signals addressed to it from one or
more thermometers and address signals to the furnace.  Suppose further
that the furnace can accept a variety of signals, e.g. it can refill
its water tank as well as turn on or off.  Then the message sent by
the thermostat must include information distinguishing it from other
messages the furnace may receive.  Suppose further that the whole
system isn't designed at once.  We may want to provide for more kinds
of messages to new kinds of furnaces.  All this forces more and more
syntax and semantics into the language used by the thermostat without
even complicating its decisions.  As discussed elsewhere, we can also
go in that direction.
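
A sketch of the point in code (the names and message kinds are invented
for illustration):

from dataclasses import dataclass

@dataclass
class Message:
    to: str        # addressee on the bus, e.g. the furnace
    sender: str    # which thermostat or thermometer sent it
    kind: str      # "turn-on", "turn-off", "refill-tank", ...

def thermostat_decision(temperature: float, setpoint: float) -> Message:
    # The decision itself is as simple as ever; only the message has
    # acquired syntax (addressee, sender, kind), because the furnace now
    # receives several kinds of messages from several sources.
    kind = "turn-on" if temperature < setpoint else "turn-off"
    return Message(to="furnace", sender="thermostat-1", kind=kind)

print(thermostat_decision(17.0, 20.0))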

	We can expect that the brain uses spatial and
temporal slots and also syntax, and that these mechanisms
are mixed in a way determined by evolution.

	There seem to be two key points at which slots
and wires must give way to genuine syntax.

(1) when we have compositionality --- the ability to construct
arbitrarily deep expressions.

(2) when we have introspection.

When looking for primitive forms of this, perhaps we should start
with the genetic code.  Maybe it's used directly.  Certainly it
encodes behavior.  The key mechanism might be some kind of
gensym().  Two (or more) copies of a new token are created
and one is sent off one way and the other another way.  They
can match later and the matching can trigger an event.  The
immune mechanism is another key --- perhaps not entirely
different.
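
A toy sketch of the gensym() mechanism suggested above (everything in it
is invented for illustration):

import itertools

_counter = itertools.count()

def gensym(prefix="g"):
    # Mint a fresh token whose only significance is its novelty.
    return prefix + str(next(_counter))

token = gensym()
sent_one_way = [token]        # one copy goes off one way
sent_another_way = [token]    # the other copy goes another way

# A later match between the two copies triggers an event.
if set(sent_one_way) & set(sent_another_way):
    print("match on", token, "-- trigger event")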

Sep 12
167 Devise a language that will do for Tom, who has been slipped a Mickey
Finn and moved from Shakey's Pizza parlor in Costa Mesa to the similar
Shakey's in Westwood.  Tom makes various errors of reference because
he doesn't know of the move.  When Tom is told of the move he reorients
himself promptly.  Seemingly he needs only one fact and doesn't have
to revise a large number of sentences.  We need to design Tom's language
so that his reorientation is accomplished by revising as few sentences
as possible in his memory.  That this should be possible is a design
constraint for our language of contexts.
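
A sketch of the design constraint in context notation (the symbols are
illustrative): state Tom's particular facts relative to a single indexical
term, so that only one sentence need change.  Revising

$$\mathit{ist}(c_{\mathit{Tom}},\ \mathit{here} = \mathit{ShakeysCostaMesa})$$

to

$$\mathit{ist}(c_{\mathit{Tom}},\ \mathit{here} = \mathit{ShakeysWestwood})$$

reorients him, while facts stated in terms of $\mathit{here}$, such as where
the counter is relative to the door, need no revision.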

182 The distinctions between beliefs de re and de dicto don't seem to
allow for mixed beliefs.  ``Tom wants to have an affair with Bill's wife''
may be de re about Bill and de dicto about his wife.
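
In a concept notation of the kind mentioned above (again, the symbols are
only illustrative), the mixed reading might be written

$$\mathit{wants}(\mathit{tom},\ \mathit{HaveAffairWith}(\mathit{WifeOf}(\mathit{Concept}(\mathit{bill})))),$$

where $\mathit{bill}$ is the man himself, so the attitude is de re with
respect to Bill, while his wife enters only under the description
$\mathit{WifeOf}$, i.e. de dicto.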

186 The child believes that Santa Claus is coming on that night.  While
Santa Claus doesn't exist, the child is correct in his inference that
some goodies will turn up in his stocking.

188 Suppose Tom knows what mammals are, e.g. he has had the mammalian
features of specific animals pointed out to him.  He knows nothing of
dugongs but hears that they are mammals.  Nevertheless, if a specific
animal is pointed out to him as a dugong, he is prepared to look for
its mammalian features, and he is disinclined to look for dugong eggs
(even if he knows about platypuses, since platypuses were mentioned
as exceptional).  He might more readily be inclined to accept a job
milking dugongs.  This means that he knows something about dugongs.

200 ``There still remains a grain of truth, however, in Russell's idea of a
special relation between a believer and some of the things he thinks
with, but to discuss this issue, one must turn to notional attitude
psychology and more particularly to the `engineering' question of
how to design a cognitive creature with the sort of notional world we
typically have.  In that domain, now protected from some of the misleading
doctrines about the mythic phenomena of de re and de dicto
beliefs, phenomenological distinctions emerge that cut across the
attempted boundaries of the recent philosophical literature and hold
some promise of yielding insights about cognitive organization.''

216 Dennett discusses a difference between explicit and implicit
representation of assertions.  The explicit representation may
correspond to the representation of verbally received information.
However, verbally received info often contains descriptions that
are replaced by internal designators.  For example, ``She married
the Harvard professor she was dating'' may be replaced internally
by something like ``She married Joe'' if the hearer knows Joe.
Of course, the internal name may not be Joe, as might be evidenced
by a name block that wouldn't prevent other facts about Joe
from being cited.

This suggests a digression into psychology.  Everyone has the
experience of forgetting the name of a person well known to himself.
However, it seems to me that names are specially vulnerable in
this respect and are more readily blocked than much other
information.  One never forgets the sex of an acquaintance,
it would seem.

Comparing psychological introspection with mathematical logic,
we can ask how much structure there is evidence for in memories.
Suppose one is informed:

``She married a tall Harvard professor of physical chemistry that
she met skiing but who couldn't swim.''  There is no reason to
suppose that anything like a parsed string is stored.  It seems
introspectively more plausible to me that the psychological
analog of gensym() is done and all the properties of the
person whom she married are attached to the resulting new
entity.  We could check this by asking whether anything about
the order in which the professor's properties were stated
is long remembered.  We need to see if this works for sentences
that include universal statements, ``No-one can deter Daniel once
he makes up his mind.''

People forget the sex of babies.

The gensym() theory might be refuted if people often conflate
two people whom they separately know well.
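
A toy rendering of the gensym() theory of memory (the representation is
invented for illustration):

import itertools

_counter = itertools.count()

def gensym(prefix="person"):
    return prefix + str(next(_counter))

# Hearing the sentence mints a fresh internal entity and attaches the stated
# properties to it as an unordered collection, so on this theory nothing
# about the order in which they were stated is retained.
husband = gensym()
memory = {
    husband: {
        "married": "she",
        "tall": True,
        "profession": "Harvard professor of physical chemistry",
        "met-her": "skiing",
        "can-swim": False,
    }
}
print(memory)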

216 It is not clear whether semantic nets come under Dennett's notion
of explicit representation.

217 Explicitness is required for communication.  Exactly what kind
of explicitness is required is a good question.  One can't
transmit  p∧q  by transmitting  p≡q  and  p∨q  without transmitting
p  and  q  themselves as constituents of  p≡q  and  p∨q.  On the
other hand, once an object is created, it can be designated in communication
by a great number of descriptions.  Also the commutativities
are much used in communication.
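
For the record, the propositional equivalence behind the example is

$$(p \equiv q) \wedge (p \vee q) \;\equiv\; (p \wedge q),$$

so the two transmitted formulas do determine  p∧q; the point is that  p
and  q  still occur explicitly as constituents of what is transmitted.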

218 The fact that there has to be a system at the bottom with know-how
confuses most people, because that system can itself be understood.
Consider the Hofstadter dialog between Achilles and the tortoise.
(Or is this one Lewis Carroll himself?)   As soon as the tortoise
tries to drive Achilles back on his justification for modus ponens,
Achilles should have replied, ``I do modus ponens, because that's
the way I'm built.  However, if you like, I'll do as many verbal
steps of metalinguistic regress as we both have time for.''

219 We AI types think that explicit use of rules will work ---
provided resolution is built in.  Resolution is just modus ponens
generalized to include variables.
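
A one-step illustration (the clauses are made up): from the clauses

$$\neg Q(x) \vee P(x) \qquad Q(a)$$

resolution unifies $x$ with $a$ and yields $P(a)$.  With no variables
present, resolving  ¬q∨p  with  q  to get  p  is exactly modus ponens
on  q⊃p.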

222 A person doesn't follow the original recipe he received for
tying a knot, e.g. tying ones shoes.  He doesn't have to, because
he has built up a set of pattern action rules that determine the
next step at each point in the sequence on the basis of
kinaesthetic and visual perceptions.  These rules have to be
learned by practice, because we don't have a language for
describing them, don't have the ability to observe them well
in ourselves or the ability to learn them from other people's
descriptions.  For example, when I am about to make a turn
in my car, I usually turn the wheel almost exactly the right
amount for the curvature of the turn and hold that amount
till the turn is complete.  I cannot tell someone whom I am
teaching to drive what the right amount is.

227 Dennett misses the real challenge to connectionist models ---
how to recognize patterns, e.g. linguistic patterns.  Or even
classifying all the rooms in a house.  How to get sequential
machines out of the purely combinatorial structure of
connectionist systems.

243 I wonder if Dennett thinks he is doing something analogous
to higher order logic when he describes the levels of embeddings
of sentences within sentences.  Higher order logic would be
something like, ``He wishes wishing were enough to achieve
what is wished''.

243 Follow up references to Bennett and Grice.

243 Infinite order in Dennett's sense isn't actually very difficult.
``I want him to believe this sentence.''

Like all philosophers, Dennett spends more time arguing with his
fellow philosophers than in working on the problems.

302 If the frog snaps at a lead pellet, did the dark object detector
mistakenly say it was a fly, did it function correctly and the brain
make the mistake, or should we merely interpret frogs as dark object
eaters?  Perhaps the genes made the mistake.  My view is that you
can have it however you like, but if you interpret the frog as a
dark object eater, your intentional model explains less about the
frog than a model that ascribes a desire to satisfy hunger to the system.

If we take the design stance, it is legitimate to ask whether the
ability to discriminate against lead pellets is useful in some future
environment, and whether it is better accomplished by improving the
retinal processing or by adding a subroutine to the brain.  If we
accomplish the latter then the preferred interpretation of the retinal
mechanism is definitely dark-object-detector.  In fact, the question
may be answerable if the frog's brain as it is now rejects some
dark spots passed by the retinal mechanisms.

313 The natives use the word glug for combustible swamp gas, namely
methane, because that's the only combustible gas occurring in their
swamps.  Dennett asks whether they are mistaken to use the word
when they encounter acetylene, and concludes that there is no
fact of the matter about whether glug includes just methane or
all combustible gases.  Taking the design stance merely requires
that the natives make a distinction if this becomes relevant to
their lives because of their purchase of cooking stoves.

Historically, the word air was used for a while for all gases.
According to the OED the word gas was invented by
the Dutch chemist J. B. Van Helmont (1577-1644).  He used it
for an occult principle supposed to be contained in all bodies,
and regarded by him as an ultra-rarefied condition of water.
It didn't get its present meaning of applying to all gases till
about 1798.  This is an example, common in science, of how the
need for a word depends on discovery.  From the point of view
that follows the discovery, previous uses may be excessively
specialized, excessively general, cutting across the natural
categories, or just inconsistent.

319 The "Intentional Fallacy" in literary interpretation is
asking the author what he had in mind.  Doubtless there are
variations in the extreme to which this doctrine is carried.

327 The proposition that parallel hardware is needed is
unproved.  Dennett misses the main intuition behind
Searle's Chinese room example: the question of interpretation.

342 Normative principle: attribute to a creature the propositional
attitudes it ``ought to have'' given its circumstances.
Projective principle(s): attribute to a creature the propositional
attitudes one supposes one would have oneself under those
circumstances.

343 ``project ourselves into his ... state of mind'' - Quine

344 ``Casting our real selves thus in unreal roles, we do not
generally know how much reality to hold constant.  Quandaries arise.''
(Quine 1960, p. 219)

Generalities:

	The laws of physics allow in principle computing the response
of a physical system to arbitrary external forces.  The specifications
of engineering systems such as electronic circuits often allow the
prediction of behavior only for inputs subject to certain conditions.
When we take the intentional stance, the behavior of a system is often
predictable only for ``proper'' sequences of inputs.  Before discussing
this for intentional systems, let's consider an electronic system, the
JK flipflop.

	The JK flipflop is a device that can be used to store one bit
of information in a computer or other electronic device.  It has, besides
its power and ground connections, three input and two output terminals.
The manufacturer of a JK flipflop provides a description sheet specifying
its behavior.  However, this sheet specifies behavior only if the signals
in and the connections out meet certain specifications.  If the device
is used in other ways, the manufacturer doesn't say what will happen.
He doesn't guarantee that two units will behave the same, doesn't
guarantee not to make unannounced changes in the product that affect its
behavior when it is used contrary to specifications and doesn't even
guarantee that improper use won't break it.

	Here are some of the specifications.

	(1) The power and ground terminals must be properly connected
to a proper voltage source with not too large an impedance and not too
much noise.  The temperature must be kept within specified limits.
The description may not say it, but the device may malfunction in
space because of too much ionizing radiation.

	(2) The voltages on the J and K inputs should be changed only
when the clock input is off.  They should reach steady values sufficiently
before the clock input goes on.

	(3) When the clock is turned on exactly one of the J and K inputs
should be on and the other should be off.

	(4) The clock should stay on for at least a specified time, and
during this time the J and K inputs should hold their states.

	If these conditions are met, the manufacturer specifies that
when the clock is turned off the output terminals
will hold voltages corresponding to the J and K inputs respectively.  These voltages
will remain as long as the clock stays off, independently of changes
in the J and K inputs.

	This can be regarded as a description of the
flipflop at a (semi) logical level.  It is what is needed for designing
computers.  Additional information at the physical level is needed only
for dealing with malfunctions and perhaps for making sure that when
power  is turned on, the system ends up in a determinate state.
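
A toy model of this point about specification (the code and its conventions
are mine, not the manufacturer's): behavior is defined only when the usage
conditions are met, and otherwise the model declines to predict.

class JKFlipFlop:
    def __init__(self):
        self.q = None                 # stored bit; indeterminate until the first proper pulse

    def clock_pulse(self, j: int, k: int) -> int:
        # Conditions (2)-(4) are idealized away as: J and K are held steady
        # for the whole pulse; condition (3) is checked explicitly.
        if j == k:
            raise ValueError("outside specification: exactly one of J and K must be on")
        self.q = 1 if j else 0        # the stored bit follows the asserted input
        return self.q                 # held as long as the clock stays off

ff = JKFlipFlop()
print(ff.clock_pulse(1, 0))           # 1
print(ff.clock_pulse(0, 1))           # 0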

	Analogously, the behavior of an intentional system is predictable
by computations with its intentional state only if it is in a proper
intentional state to begin with, and its inputs meet certain specifications.